Search results for "COORDINATE DESCENT"

Showing 8 of 8 documents

Regularized Regression Incorporating Network Information: Simultaneous Estimation of Covariate Coefficients and Connection Signs

2014

We develop an algorithm that incorporates network information into regression settings. It simultaneously estimates the covariate coefficients and the signs of the network connections (i.e. whether the connections are of an activating or of a repressing type). For the coefficient estimation steps, an additional penalty is placed on top of the lasso penalty, similarly to Li and Li (2008). We develop a fast implementation of the new method based on coordinate descent. Furthermore, we show how the new method can be applied to time-to-event data. The new method yields good results in simulation studies concerning the sensitivity and specificity of non-zero covariate coefficients, estimation of networ…
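As a rough illustration of the coordinate descent machinery such implementations rely on, here is a minimal sketch of cyclic coordinate descent for a plain lasso objective (1/2)·||y − Xβ||² + λ·||β||₁. This is not the paper's network-penalized method; the function name `lasso_cd` and all parameter choices are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for the lasso (generic sketch).

    Each coordinate update is a closed-form soft-thresholding step;
    the residual is maintained incrementally for efficiency.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)       # precomputed squared column norms
    r = y - X @ beta                    # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]      # remove coordinate j's contribution
            rho = X[:, j] @ r           # partial correlation with residual
            # soft-thresholding update for coordinate j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]      # restore residual with new value
    return beta
```

For an orthonormal design the loop reduces to componentwise soft-thresholding of X'y, which makes the sketch easy to sanity-check by hand.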

Clustering; high-dimensional data; machine learning; regression; gene expression data; pathway information; penalized regression; connection (mathematics); set (abstract data type); lasso (statistics); covariate; artificial intelligence; sensitivity (control systems); coordinate descent; algorithm; mathematics; jel:C41; jel:C13

Differential geometric least angle regression: a differential geometric approach to sparse generalized linear models

2013

Summary: Sparsity is an essential feature of many contemporary data problems. Remote sensing, various forms of automated screening and other high-throughput measurement devices collect a large amount of information, typically about a few independent statistical subjects or units. In certain cases it is reasonable to assume that the underlying process generating the data is itself sparse, in the sense that only a few of the measured variables are involved in the process. We propose an explicit method of monotonically decreasing sparsity for outcomes that can be modelled by an exponential family. In our approach we generalize the equiangular condition in a generalized linear model. Although the …

Statistics and probability; generalized linear models; sparse models; mathematical optimization; variable selection; path following algorithm; equiangular polygon; lasso; Dantzig selector; exponential family; differential geometry; information geometry; coordinate descent; Fisher information; least angle regression; generalized degrees of freedom; shrinkage; simple linear regression; covariance penalty theory; algorithm; Settore SECS-S/01 - Statistica

Estimation of sparse generalized linear models: the dglars package

2013

dglars is a publicly available R package that implements the method proposed in Augugliaro, Mineo and Wit (2013), developed to study the sparse structure of a generalized linear model. This method, called dgLARS, is based on a differential geometrical extension of the least angle regression method (LARS). The core of the dglars package consists of two algorithms implemented in Fortran 90 to efficiently compute the solution curve: specifically, a predictor-corrector algorithm and a cyclic coordinate descent algorithm.

Generalized linear models; dgLARS; predictor-corrector algorithm; cyclic coordinate descent algorithm; sparse models; variable selection; Settore SECS-S/01 - Statistica

Cyclic coordinate for penalized Gaussian graphical models with symmetry restriction

2014

In this paper we propose two efficient cyclic coordinate descent algorithms to estimate a structured concentration matrix in penalized Gaussian graphical models. Symmetry restrictions on the concentration matrix are particularly useful to reduce the number of parameters to be estimated and to create specific structured graphs. The penalized Gaussian graphical models are suitable for high-dimensional data.

Factorial dynamic Gaussian graphical models; Gaussian graphical models; graphical lasso; cyclic coordinate descent methods; Settore SECS-S/01 - Statistica

Non-smooth optimization for the estimation of cellular immune components in a tumor environment

2021

In this PhD proposal we will investigate new regularization methods of inverse problems that provide an absolute quantification of immune cell subpopulations. The mathematical aspect of this PhD proposal is two-fold. The first goal is to enhance the underlying linear model through a more refined construction of the expression matrix. The second goal is, given this linear model, to derive the best possible estimator. These two issues can be treated in a decoupled way, which is the standard for existing methods such as Cibersort, or as a coupled optimization problem (which is known as blind deconvolution in signal processing).
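In the decoupled setting described above, once the expression matrix is fixed, estimating nonnegative cell-type proportions from a bulk profile is a nonnegative least-squares problem that coordinate descent handles naturally. The sketch below is a generic illustration under assumed notation, not the method of Cibersort or of this proposal; `nnls_cd`, `A` (expression matrix) and `b` (bulk profile) are illustrative names.

```python
import numpy as np

def nnls_cd(A, b, n_iter=200):
    """Nonnegative least squares via cyclic coordinate descent (sketch).

    Minimizes ||A x - b||^2 subject to x >= 0 by updating one
    coordinate at a time and projecting onto the constraint.
    """
    m, p = A.shape
    x = np.zeros(p)
    col_sq = (A ** 2).sum(axis=0)       # precomputed squared column norms
    r = b - A @ x                       # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += A[:, j] * x[j]         # remove coordinate j's contribution
            # unconstrained minimizer, then projection onto x_j >= 0
            x[j] = max(A[:, j] @ r / col_sq[j], 0.0)
            r -= A[:, j] * x[j]         # restore residual
    return x
```

Each coordinate subproblem is a one-dimensional quadratic, so the projected update is exact, which is what makes coordinate descent attractive for this class of constrained deconvolution problems.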

Coordinate descent; inverse problem; automatic differentiation; biomedical application; hyperparameter selection; non-smooth optimization; [INFO.INFO-OH] Computer Science [cs]/Other [cs.OH]

Sparse nonnegative tensor decomposition using proximal algorithm and inexact block coordinate descent scheme

2021

Nonnegative tensor decomposition is a versatile tool for multiway data analysis, by which the extracted components are nonnegative and usually sparse. Nevertheless, the sparsity is only a side effect and cannot be explicitly controlled without additional regularization. In this paper, we investigate the nonnegative CANDECOMP/PARAFAC (NCP) decomposition with a sparse regularization term using the l1-norm (sparse NCP). When high sparsity is imposed, the factor matrices will contain more zero components and will not be of full column rank. Thus, the sparse NCP is prone to rank deficiency, and algorithms for the sparse NCP may not converge. In this paper, we propose a novel model of sparse NCP w…
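The combination of an l1 penalty with a nonnegativity constraint admits a simple closed-form proximal operator, which is the elementary building block of proximal and block coordinate descent schemes of this kind. The snippet below is a generic sketch of that operator only; the paper's full inexact block coordinate descent algorithm is not reproduced, and the function name is an illustrative assumption.

```python
import numpy as np

def prox_nonneg_l1(v, lam):
    """Proximal operator of lam * ||x||_1 + indicator(x >= 0).

    For this composite penalty the prox is one-sided soft-thresholding:
    shift down by lam, then clip at zero.
    """
    return np.maximum(v - lam, 0.0)
```

In a block coordinate descent scheme for sparse NCP-type models, each factor-matrix update would apply such an operator elementwise after a gradient (or inexact) step on the smooth least-squares part.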

Tensor decomposition; signal processing; proximal algorithm; algorithms; numerical analysis; inexact block coordinate descent; sparse regularization; nonnegative CANDECOMP/PARAFAC decomposition

dglars: An R Package to Estimate Sparse Generalized Linear Models

2014

dglars is a publicly available R package that implements the method proposed in Augugliaro, Mineo, and Wit (2013), developed to study the sparse structure of a generalized linear model. This method, called dgLARS, is based on a differential geometrical extension of the least angle regression method proposed in Efron, Hastie, Johnstone, and Tibshirani (2004). The core of the dglars package consists of two algorithms implemented in Fortran 90 to efficiently compute the solution curve: a predictor-corrector algorithm, proposed in Augugliaro et al. (2013), and a cyclic coordinate descent algorithm, proposed in Augugliaro, Mineo, and Wit (2012). The latter algorithm, as shown here, is significan…

Statistics and probability; generalized linear models; mathematical optimization; Fortran; cyclic coordinate descent algorithm; dgLARS; feature selection; Dantzig selector; predictor-corrector algorithm; likelihood; least angle regression; sparse models; differential geometry; variable selection; expression; tissues; breast-cancer risk; marker; haplotypes; shrinkage; software; algorithm; Settore SECS-S/01 - Statistica

Differential geometric LARS via cyclic coordinate descent method

2012

We address the problem of how to compute the coefficient path implicitly defined by the differential geometric LARS (dgLARS) method in a high-dimensional setting. Although the geometrical theory developed to define the dgLARS method does not require the definition of a penalty function, we show that it is possible to develop a cyclic coordinate descent algorithm to compute the solution curve in a high-dimensional setting. Simulation studies show that the proposed algorithm is significantly faster than the predictor-corrector algorithm originally developed to compute the dgLARS solution curve.

Cyclic coordinate descent method; differential geometry; dgLARS; generalized linear models; LARS; sparse models; variable selection; Settore SECS-S/01 - Statistica